Verifying And Interpreting Neural Networks using Finite Automata
Verifying properties and interpreting the behaviour of deep neural networks
(DNN) is an important task given their ubiquitous use in applications,
including safety-critical ones, and their black-box nature. We propose an
automata-theoretic approach to tackling problems arising in DNN analysis. We show
that the input-output behaviour of a DNN can be captured precisely by a
(special) weak Büchi automaton of exponential size. We show how these can be
used to address common verification and interpretation tasks like adversarial
robustness, minimum sufficient reasons, etc. We report on a proof-of-concept
implementation that translates DNNs to automata on finite words for better
efficiency, at the cost of losing precision in the analysis.
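The finite-word approximation mentioned above can be illustrated with a toy sketch (not the paper's construction, and with made-up weights): quantize a tiny ReLU network's inputs and outputs to a fixed number of bits, collect the realized input-output bit words as a finite language, and determinize that language into a trie-shaped DFA. The quantization width `K` is where precision is lost.

```python
# Hypothetical sketch: approximating a tiny ReLU network's input-output
# relation by an automaton on finite words, via K-bit quantization.
from itertools import product

K = 3  # assumed quantization width (bits per value)

def relu(x):
    return max(0.0, x)

def net(x):
    # toy 1-input, 1-output network with one hidden neuron (made-up weights)
    h = relu(2.0 * x - 1.0)
    return relu(-1.0 * h + 1.0)

def quantize(v):
    # clamp to [0, 1] and encode as a K-bit word, most significant bit first
    i = max(0, min(2**K - 1, int(v * 2**K)))
    return tuple((i >> b) & 1 for b in reversed(range(K)))

# The finite language: all words input_bits + output_bits realized by net.
language = set()
for bits in product([0, 1], repeat=K):
    x = sum(b << i for i, b in enumerate(reversed(bits))) / 2**K
    language.add(bits + quantize(net(x)))

# Determinize the finite language into a trie-shaped DFA.
dfa = {}                # (state, bit) -> state
accepting = set()
states = {(): 0}        # prefix -> state id
for word in sorted(language):
    for i in range(len(word)):
        src = states[word[:i]]
        dst = states.setdefault(word[:i + 1], len(states))
        dfa[(src, word[i])] = dst
    accepting.add(states[word])

def accepts(word):
    s = 0
    for bit in word:
        if (s, bit) not in dfa:
            return False
        s = dfa[(s, bit)]
    return s in accepting
```

Since the network is a function, each quantized input prefix extends to exactly one accepted output suffix; coarser quantization shrinks the automaton but conflates more inputs.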
We Cannot Guarantee Safety: The Undecidability of Graph Neural Network Verification
Graph Neural Networks (GNN) are commonly used for two tasks: (whole) graph
classification and node classification. We formally introduce generically
formulated decision problems for both tasks, corresponding to the following
pattern: given a GNN, some specification of valid inputs, and some
specification of valid outputs, decide whether there is a valid input
satisfying the output specification. We then prove that graph classifier
verification is undecidable in general, implying that there cannot be an
algorithm surely guaranteeing the absence of misclassification of any kind.
Additionally, we show that verification in the node classification case becomes
decidable as soon as we restrict the degree of the considered graphs.
Furthermore, we discuss possible changes to these results depending on the
considered GNN model and specifications.
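The decision pattern described above can be made concrete with a hedged brute-force sketch: a toy one-round sum-aggregation GNN node classifier (made-up weights), a valid-input specification, and a valid-output specification. Here the search is made finite by bounding the number of nodes, which is a stronger restriction than the degree bound discussed in the abstract, so plain enumeration decides the question.

```python
# Hypothetical sketch of the decision pattern: given a (tiny) GNN, an input
# spec, and an output spec, decide whether some valid input satisfies the
# output spec by enumerating all graphs on at most N nodes.
from itertools import combinations, product

N = 3  # bound on the number of nodes (makes the problem trivially decidable)

def gnn_node_class(features, edges, v):
    # one round of sum aggregation over neighbours, then a threshold (toy GNN)
    agg = sum(features[u] for u in range(len(features))
              if (u, v) in edges or (v, u) in edges)
    return 1 if 2 * features[v] + agg - 2 >= 0 else 0

def valid_input(features, edges):
    # example input spec: binary features and at least one edge
    return len(edges) >= 1

def output_spec(features, edges):
    # example output spec: some node is classified 1
    return any(gnn_node_class(features, edges, v)
               for v in range(len(features)))

def exists_valid_input():
    all_edges = list(combinations(range(N), 2))
    for feats in product([0, 1], repeat=N):
        for r in range(len(all_edges) + 1):
            for es in combinations(all_edges, r):
                if valid_input(feats, set(es)) and output_spec(feats, set(es)):
                    return True
    return False
```

The undecidability result says no such procedure can exist once graphs are unrestricted; the node-classification decidability result corresponds to restoring finiteness of the relevant search space via a degree bound.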
Reachability In Simple Neural Networks
We investigate the complexity of the reachability problem for (deep) neural
networks: does the network compute a valid output given some valid input? It was recently
claimed that the problem is NP-complete for general neural networks and
specifications over the input/output dimension given by conjunctions of linear
inequalities. We recapitulate the proof and repair some flaws in the original
upper and lower bound proofs. Motivated by the general result, we show that
NP-hardness already holds for restricted classes of simple specifications and
neural networks. Allowing for a single hidden layer and an output dimension of
one as well as neural networks with just one negative, zero and one positive
weight or bias is sufficient to ensure NP-hardness. Additionally, we give a
thorough discussion and outlook of possible extensions for this direction of
research on neural network verification.
Comment: arXiv admin note: substantial text overlap with arXiv:2108.1317